Although the appropriateness of applying quality criteria to qualitative research has been debated in the literature, it is generally accepted by those who conduct systematic reviews of qualitative evidence, as well as by qualitative synthesis methodologists, that qualitative research included in systematic reviews should undergo quality assessment. Indeed, critical appraisal is an essential feature of the JBI meta-aggregative approach to qualitative synthesis.1 Given that the raison d'être of qualitative synthesis of health-related research is to inform health care practice or policy, low-quality studies should be excluded from the synthesis, as a lack of rigor could impact the validity of the study findings and, hence, the confidence that can be placed in the trustworthiness of the synthesized findings.1 Despite the recommendation to exclude low-quality studies, it is apparent that many reviewers include all studies in their reviews, regardless of assessed quality. In fact, in a recent cursory examination of a large number of published qualitative systematic reviews, I found that, for more than half of the reviews, the decision had been made to include all eligible studies (ie, those that met the inclusion criteria) irrespective of assessed quality. For the other reviews, the reviewers used critical appraisal cutoff scores (ie, a specific number of critical appraisal criteria that had to be met for satisfactory study quality) or specified particular criteria that must be met (or did not have to be met) for satisfactory study quality. In some reviews, the reviewers accepted appraisal findings even when criteria fulfillment was unclear (ie, it was not clear that the criterion had been affirmatively met). Often, reviewers did not provide any rationale for their decisions to include or exclude studies on the basis of quality, with some decisions seemingly arbitrary.
An absence of consensus in the literature on specific crucial criteria (ie, methodological limitations) and thresholds for excluding studies from qualitative synthesis on the basis of quality may help explain such decisions. However, some reviewers who had decided to include all studies in their systematic reviews, regardless of study quality, indicated various justifications for their decisions: participant voices were represented and valuable to the review objectives2; both high-quality and low-quality studies can generate insights for a richer understanding of the phenomenon of interest3; there were few studies available on the phenomenon of interest4; expectations for and limitations of reporting in publications may impact the information available for quality assessment5; and researcher cultural and theoretical orientation and researcher reflexivity are often not reported in research publications (a practice that may itself be influenced by researchers' epistemological assumptions: researchers with postpositivist or objectivist, rather than constructivist, assumptions may be less inclined to address their positionality and influence on the research).5,6 Review teams with which I have worked have made similar decisions with similar rationales regarding inclusion and exclusion of studies on the basis of critical appraisal. In one review,7 we included all studies regardless of assessed quality, as the number of studies found that were relevant to the phenomenon of interest was limited, and all studies had relevant findings with participant voice to inform an understanding of the phenomena being examined. In a second review,8 the review team determined that 2 criteria (using the JBI critical appraisal checklist for qualitative research9) were not essential to methodological quality (Is there congruity between the stated philosophical perspective and the research methodology? and Is there a statement locating the researcher culturally or theoretically?)
and were, therefore, not used in determining study inclusion. We rationalized that information relevant to those criteria is often not provided in research reports. Studies were included in the review if ratings on the other 8 criteria were “yes” or “unclear” (ie, some evidence or support for the criterion was present but detail or full explication was missing). We made the decision to include studies with unclear assessment findings to ensure there was a sufficient number of studies, with relevant findings supported by participant voice, to address the review questions. However, even with this seemingly “lenient” approach to critical appraisal, a large number of studies were excluded from the review. Although exclusion of the studies, and potentially meaningful participant voice, was disconcerting, there was a large enough number of relevant research findings across the included studies to provide informative syntheses. Nevertheless, in a subsequent review (published in this issue),10 we made the decision to include all eligible studies, no matter the appraised quality, to avoid potential loss of important findings and compelling participant voice. Critical appraisal of qualitative research is a complex issue, and making decisions about inclusion of studies in qualitative syntheses on the basis of study quality is a difficult and challenging process. ConQual scores (based on an assessment of the dependability of primary studies and the credibility of study findings) provide transparency about the impact of the inclusion of lesser-quality studies on the confidence that knowledge users may have in the synthesized findings.9 This may give reviewers a level of comfort in their decisions about inclusion of studies with quality shortcomings. However, ConQual scores may inadvertently lead to synthesized findings being presented with low confidence, limiting their potential to influence practice.
Irrespective of this, methodological work is needed to establish whether there are critical appraisal criteria that may be deemed essential to study rigor. While acknowledging that decisions and the rationale for decisions may be specific to the context of a particular review, additional guidance for reviewers about essential criteria may facilitate their decision-making and judgments about study inclusion on the basis of quality assessments, and would promote greater consistency in these judgments across reviews.

Sandra P. Small

Faculty of Nursing, Memorial University, St. John’s, NL, Canada; Memorial University Faculty of Nursing Collaboration for Evidence-Based Nursing and Primary Health Care: A JBI Affiliated Group, St. John’s, NL, Canada

SPS is an associate editor of JBI Evidence Synthesis, but was not involved in the editorial processing of this manuscript.

Correspondence: [email protected]